
    Global Stabilization of Triangular Systems with Time-Delayed Dynamic Input Perturbations

    A control design approach is developed for a general class of uncertain strict-feedback-like nonlinear systems with dynamic uncertain input nonlinearities involving time delays. The system structure considered in this paper includes a nominal uncertain strict-feedback-like subsystem whose input signal is generated by uncertain nonlinear input unmodeled dynamics that are driven by the entire system state (including unmeasured state variables) and are also allowed to depend on time-delayed versions of the system state and control input signals. The system further includes additive uncertain nonlinear functions, coupled nonlinear appended dynamics, and uncertain dynamic input nonlinearities with time-varying uncertain time delays. The proposed approach yields a globally stabilizing, delay-independent, robust adaptive output-feedback dynamic controller based on a dual dynamic high-gain scaling structure.
    Comment: 2017 IEEE International Carpathian Control Conference (ICCC)
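
    For intuition only, the sketch below simulates a toy two-state strict-feedback system under a simple static high-gain state feedback. It is not the paper's delay-independent adaptive output-feedback design; the system, the gain parameter k, and the disturbance d(t) are assumptions made for illustration.

    ```python
    # Minimal sketch (illustrative only; not the paper's controller): a toy
    # strict-feedback system x1' = x2, x2' = u + d(t) stabilized by the static
    # high-gain feedback u = -k^2*x1 - 2*k*x2, showing how a single scaling
    # parameter k sets all the gains acting on the state.
    import numpy as np

    def simulate(k=5.0, dt=1e-3, T=5.0):
        x = np.array([1.0, -0.5])                 # arbitrary initial condition
        for i in range(int(T / dt)):
            t = i * dt
            d = 0.2 * np.sin(3.0 * t)             # bounded input perturbation (assumed)
            u = -k**2 * x[0] - 2.0 * k * x[1]     # high-gain state feedback
            x = x + dt * np.array([x[1], u + d])  # explicit Euler step
        return x

    if __name__ == "__main__":
        print("state after 5 s:", simulate())     # stays in a small neighborhood of the origin
    ```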

    High-Dimensional Controller Tuning through Latent Representations

    In this paper, we propose a method to automatically and efficiently tune high-dimensional vectors of controller parameters. The method first learns a mapping from the high-dimensional controller parameter space to a lower-dimensional space using a machine learning-based algorithm, and this mapping is then utilized in an actor-critic framework using Bayesian optimization (BO). The approach is applicable to complex systems (such as quadruped robots) and enables efficient generalization to different control tasks while reducing the number of evaluations required to tune the controller parameters. We evaluate our method on a legged locomotion application and show its efficacy in tuning the high-dimensional controller parameters with a reduced number of evaluations. Moreover, the method generalizes to new tasks and transfers to other robot dynamics.
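
    As a rough illustration of the idea (not the authors' implementation), the sketch below learns a low-dimensional latent space over previously evaluated controller parameter vectors with PCA and then searches that latent space against a black-box rollout cost. PCA, the random search used as a stand-in for Bayesian optimization, the dimensions, and the quadratic cost are all assumptions for this example.

    ```python
    # Sketch: tune a high-dimensional controller parameter vector by searching a
    # learned low-dimensional latent space (PCA as a stand-in for the learned
    # mapping; random search as a stand-in for Bayesian optimization).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    D, d = 100, 4                                  # full and latent dimensions (assumed)
    prior_params = rng.normal(size=(500, D))       # previously evaluated controllers (mock data)
    latent_map = PCA(n_components=d).fit(prior_params)

    def rollout_cost(theta):
        # Placeholder for an expensive rollout on the robot or a simulator.
        target = np.linspace(-1.0, 1.0, D)
        return float(np.sum((theta - target) ** 2))

    best_cost = np.inf
    for _ in range(200):                           # evaluation budget
        z = rng.normal(scale=2.0, size=(1, d))     # candidate in latent space
        theta = latent_map.inverse_transform(z)[0] # decode to full parameter vector
        best_cost = min(best_cost, rollout_cost(theta))

    print("best cost found:", best_cost)
    ```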

    Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection

    This paper proposes a data-efficient detection method for deep neural networks against backdoor attacks under a black-box scenario. The approach is motivated by the intuition that features corresponding to triggers have a higher influence on the backdoored network output than any benign features. To quantitatively measure the effects of triggers and benign features on the backdoored network output, we introduce five metrics. To calculate the five metric values for a given input, we first generate several synthetic samples by injecting the input's partial contents into clean validation samples; the five metrics are then computed from the output labels of the corresponding synthetic samples. One contribution of this work is the use of only a tiny clean validation dataset. From the computed metrics, five novelty detectors are trained on the validation dataset, and a meta novelty detector fuses their outputs into a meta confidence score. During online testing, our method determines whether online samples are poisoned by assessing their meta confidence scores output by the meta novelty detector. We show the efficacy of our methodology through a broad range of backdoor attacks, including ablation studies and comparisons to existing approaches. Our methodology is promising since the proposed five metrics quantify the inherent differences between clean and poisoned samples, and the detection method can be incrementally improved by appending further metrics to address future advanced attacks.
    Comment: Published in the IEEE Transactions on Information Forensics and Security
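
    To make the pipeline concrete, here is a heavily simplified sketch that assumes image-like inputs, uses a single metric instead of the paper's five (the fraction of blended synthetic samples that keep the input's predicted label), and uses scikit-learn's IsolationForest as the novelty detector. The blending scheme and all parameter values are illustrative assumptions, not the authors' exact procedure.

    ```python
    # Simplified sketch of the detection idea: blend part of a test input into clean
    # validation samples, query the black-box model, and measure how strongly the
    # input's label persists; a novelty detector trained on clean inputs flags outliers.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    def label_persistence(model_predict, x, clean_val, alpha=0.5, n=32, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        base_label = model_predict(x[None])[0]
        picks = clean_val[rng.choice(len(clean_val), size=n, replace=False)]
        synthetic = alpha * x[None] + (1 - alpha) * picks   # inject partial content of x
        labels = model_predict(synthetic)
        return np.mean(labels == base_label)                # single illustrative metric

    def fit_detector(model_predict, clean_val):
        scores = [[label_persistence(model_predict, x, clean_val)] for x in clean_val]
        return IsolationForest(random_state=0).fit(scores)

    def is_poisoned(detector, model_predict, x, clean_val):
        score = [[label_persistence(model_predict, x, clean_val)]]
        return detector.predict(score)[0] == -1             # -1 = novelty = suspicious
    ```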

    A Deep Neural Network Algorithm for Linear-Quadratic Portfolio Optimization with MGARCH and Small Transaction Costs

    We analyze a fixed-point algorithm for reinforcement learning (RL) of optimal portfolio mean-variance preferences in the setting of multivariate generalized autoregressive conditional heteroskedasticity (MGARCH) with a small penalty on trading. A numerical solution is obtained using a neural network (NN) architecture within a recursive RL loop. A fixed-point theorem proves that the NN approximation error has a big-O bound that can be reduced by increasing the number of NN parameters. The functional form of the trading penalty has a parameter ε > 0 that controls the magnitude of transaction costs. When ε is small, we can implement an NN algorithm based on an expansion of the solution in powers of ε. This expansion has a base term equal to a myopic solution with an explicit form, and a first-order correction term that we compute in the RL loop. Our expansion-based algorithm is stable, allows for fast computation, and outputs a solution with positive testing performance.
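
    As a small worked example of the expansion's structure only, the sketch below computes a standard myopic mean-variance weight w0 = (1/γ) Σ⁻¹ μ for an assumed one-step conditional covariance and leaves the first-order correction (which the paper computes in the RL loop) as a placeholder. The risk aversion γ, the numbers, and the specific form of w0 shown here are textbook quantities used for illustration, not the paper's exact expressions.

    ```python
    # Sketch of the epsilon-expansion structure w ≈ w0 + eps*w1, where w0 is a
    # standard myopic mean-variance weight (illustrative) and w1 would be learned
    # in the RL loop (left as a zero placeholder here).
    import numpy as np

    def myopic_weights(mu, sigma, gamma=5.0):
        # w0 = (1/gamma) * Sigma^{-1} mu  -- classic single-period mean-variance solution
        return np.linalg.solve(sigma, mu) / gamma

    mu = np.array([0.03, 0.05, 0.02])             # assumed one-step expected returns
    sigma = np.array([[0.040, 0.006, 0.002],      # assumed conditional covariance
                      [0.006, 0.090, 0.010],      # (e.g., an MGARCH one-step forecast)
                      [0.002, 0.010, 0.030]])

    eps = 0.01                                    # small transaction-cost parameter
    w0 = myopic_weights(mu, sigma)
    w1 = np.zeros_like(w0)                        # first-order correction: from the RL loop
    w = w0 + eps * w1
    print("base (myopic) weights:", np.round(w0, 3))
    ```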

    Inventory strategies for patented and generic products for a pharmaceutical supply chain

    Thesis (M.Eng. in Logistics) -- Massachusetts Institute of Technology, Engineering Systems Division, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 76-77).
    This thesis presents a model to determine safety stock considering the distinct planning parameters of a pharmaceutical company. Traditional parameters such as forecast accuracy, service level requirements, and average lead time are combined with a nontraditional upstream uncertainty parameter defined as supply reliability. In this instance, supply reliability measures uncertainty in the supply quantity delivered rather than variability in the delivery lead time. We consider the impact of the safety stock using two products: a proprietary product that is patented and a generic product that recently went off patent. Sensitivity analysis is performed to provide insights on the impact of variations in input parameters. The study shows that there is a significant difference in safety stock between the proposed model and the current model used by the company.
    by Prashanth Krishnamurthy and Amit Prasad. M.Eng. in Logistics.
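
    The abstract does not reproduce the thesis's model, so the sketch below only illustrates the ingredients it names: the textbook safety-stock formula combining demand and lead-time variability, extended by a hypothetical variance term for supply-quantity uncertainty (the "supply reliability" parameter). The functional form of that extra term and the numbers are assumptions for illustration.

    ```python
    # Illustrative safety-stock calculation (not the thesis's model): the standard
    # formula SS = z * sqrt(L*sigma_D^2 + D^2*sigma_L^2), plus a hypothetical extra
    # variance term for uncertainty in the delivered supply quantity.
    from math import sqrt
    from statistics import NormalDist

    def safety_stock(service_level, mean_demand, sd_demand,
                     lead_time, sd_lead_time, sd_supply_qty=0.0):
        z = NormalDist().inv_cdf(service_level)           # service-level z-factor
        demand_var = lead_time * sd_demand**2             # demand variability over the lead time
        lead_var = (mean_demand**2) * sd_lead_time**2     # lead-time variability
        supply_var = sd_supply_qty**2                     # assumed supply-quantity variability term
        return z * sqrt(demand_var + lead_var + supply_var)

    # Example: 98% service level, weekly demand 1000 +/- 250 units,
    # 4-week lead time +/- 1 week, supply-quantity uncertainty of 300 units.
    print(round(safety_stock(0.98, 1000, 250, 4, 1, sd_supply_qty=300), 1))
    ```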

    Privacy-Preserving Collaborative Learning through Feature Extraction

    We propose a framework in which multiple entities collaborate to build a machine learning model while preserving the privacy of their data. The approach utilizes feature embeddings from shared or per-entity feature extractors that transform data into a feature space for cooperation between entities. We propose two specific methods and compare them with a baseline method. In Shared Feature Extractor (SFE) Learning, the entities use a shared feature extractor to compute feature embeddings of samples. In Locally Trained Feature Extractor (LTFE) Learning, each entity uses a separate feature extractor, and models are trained using concatenated features from all entities. As a baseline, in Cooperatively Trained Feature Extractor (CTFE) Learning, the entities train models by sharing raw data. Secure multi-party algorithms are utilized to train models without revealing data or features in plain text. We investigate the trade-offs among SFE, LTFE, and CTFE with regard to performance, privacy leakage (using an off-the-shelf membership inference attack), and computational cost. LTFE provides the most privacy, followed by SFE and then CTFE; computational cost is lowest for SFE, and the relative speed of CTFE and LTFE depends on the network architecture. CTFE and LTFE provide the best accuracy. We use MNIST, a synthetic dataset, and a credit card fraud detection dataset for evaluations.
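
    The sketch below illustrates only the LTFE arrangement in the clear (the secure multi-party computation layer is omitted): each entity maps its raw features through its own extractor and a model is trained on the concatenated embeddings. Random projections stand in for the locally trained feature extractors, and the data and labels are synthetic assumptions.

    ```python
    # Sketch of LTFE-style training without the secure multi-party layer: each
    # entity applies its own local extractor, embeddings are concatenated, and a
    # single classifier is trained on the joint feature space.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    X_a = rng.normal(size=(n, 20))                  # entity A's raw features (mock)
    X_b = rng.normal(size=(n, 30))                  # entity B's raw features (mock)
    y = (X_a[:, 0] + X_b[:, 0] > 0).astype(int)     # shared labels (mock)

    W_a = rng.normal(size=(20, 8)) / np.sqrt(20)    # entity A's local extractor (stand-in)
    W_b = rng.normal(size=(30, 8)) / np.sqrt(30)    # entity B's local extractor (stand-in)
    Z = np.hstack([np.tanh(X_a @ W_a), np.tanh(X_b @ W_b)])   # concatenated embeddings

    clf = LogisticRegression(max_iter=1000).fit(Z[:800], y[:800])
    print("held-out accuracy:", clf.score(Z[800:], y[800:]))
    ```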

    LipSim: A Provably Robust Perceptual Similarity Metric

    Recent years have seen growing interest in developing and applying perceptual similarity metrics. Research has shown the superiority of perceptual metrics over pixel-wise metrics in aligning with human perception and serving as a proxy for the human visual system. At the same time, because perceptual metrics rely on neural networks, there is growing concern about their resilience, given the established vulnerability of neural networks to adversarial attacks; it is logical to infer that perceptual metrics may inherit both the strengths and shortcomings of neural networks. In this work, we demonstrate the vulnerability to adversarial attacks of state-of-the-art perceptual similarity metrics based on an ensemble of ViT-based feature extractors. We then propose a framework to train a robust perceptual similarity metric called LipSim (Lipschitz Similarity Metric) with provable guarantees. By leveraging 1-Lipschitz neural networks as the backbone, LipSim provides guarded areas around each data point and certificates for all perturbations within an ℓ2 ball. Finally, a comprehensive set of experiments shows the performance of LipSim in terms of natural and certified scores and on the image retrieval application. The code is available at https://github.com/SaraGhazanfari/LipSim
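
    To see why a 1-Lipschitz backbone yields ℓ2 certificates (a generic argument, not LipSim's specific construction), note that if every score produced by the model is 1-Lipschitz in the input, a perturbation of ℓ2 norm r can change each score by at most r, so a top-1 margin of m certifies all perturbations with r < m/2. The sketch below enforces 1-Lipschitzness for a toy linear scorer via spectral normalization and reads off that radius; the scorer and the numbers are assumptions for illustration.

    ```python
    # Generic illustration (not LipSim's architecture): make a linear scorer
    # 1-Lipschitz by dividing by its spectral norm, then compute a certified l2
    # radius from the margin between the top two scores.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 64))                 # toy linear "score head"
    W = W / np.linalg.norm(W, ord=2)              # spectral normalization -> 1-Lipschitz map

    def certified_radius(scores):
        # Each score moves by at most r under an l2 perturbation of norm r, so the
        # top-1 prediction cannot flip while 2*r is below the top-1/top-2 margin.
        top2 = np.sort(scores)[-2:]
        return (top2[1] - top2[0]) / 2.0

    x = rng.normal(size=64)
    print("certified l2 radius:", certified_radius(W @ x))
    ```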